Lagrange Interpolation

In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (x_j, y_j) with 0 \leq j \leq k, the x_j are called ''nodes'' and the y_j are called ''values''. The Lagrange polynomial L(x) has degree \leq k and assumes each value at the corresponding node, L(x_j) = y_j.

Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler.

Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.


Definition

Given a set of k + 1 nodes \{x_0, x_1, \ldots, x_k\}, which must all be distinct, x_j \neq x_m for indices j \neq m, the Lagrange basis for polynomials of degree \leq k for those nodes is the set of polynomials \{\ell_0(x), \ell_1(x), \ldots, \ell_k(x)\}, each of degree k, which take values \ell_j(x_m) = 0 if m \neq j and \ell_j(x_j) = 1. Using the Kronecker delta this can be written \ell_j(x_m) = \delta_{jm}. Each basis polynomial can be explicitly described by the product:

:\begin{align} \ell_j(x) &= \frac{x - x_0}{x_j - x_0} \cdots \frac{x - x_{j-1}}{x_j - x_{j-1}} \cdot \frac{x - x_{j+1}}{x_j - x_{j+1}} \cdots \frac{x - x_k}{x_j - x_k} \\ &= \prod_{\substack{0 \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m}. \end{align}

Notice that the numerator \prod_{m \neq j}(x - x_m) has k roots at the nodes \{x_m\}_{m \neq j}, while the denominator \prod_{m \neq j}(x_j - x_m) scales the resulting polynomial so that \ell_j(x_j) = 1.

The Lagrange interpolating polynomial for those nodes through the corresponding ''values'' \{y_0, y_1, \ldots, y_k\} is the linear combination:

:L(x) = \sum_{j=0}^{k} y_j \ell_j(x).

Each basis polynomial has degree k, so the sum L(x) has degree \leq k, and it interpolates the data because

:L(x_m) = \sum_{j=0}^{k} y_j \ell_j(x_m) = \sum_{j=0}^{k} y_j \delta_{jm} = y_m.

The interpolating polynomial is unique. Proof: assume some polynomial M(x) of degree \leq k interpolates the data. Then the difference M(x) - L(x) is zero at the k + 1 distinct nodes \{x_0, x_1, \ldots, x_k\}. But the only polynomial of degree \leq k with more than k roots is the constant zero function, so M(x) - L(x) = 0, i.e. M(x) = L(x).
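
The definition translates directly into code. The following is a minimal sketch in Python (not part of the original article; the function name lagrange_eval is an illustrative choice) that evaluates L(x) by computing each basis polynomial from its product form.

```python
# Minimal sketch: direct evaluation of the Lagrange interpolating polynomial
# L(x) = sum_j y_j * l_j(x), with l_j built from its product form.

def lagrange_eval(x, xs, ys):
    """Evaluate the Lagrange interpolant at x for nodes xs and values ys."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        # Basis polynomial l_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += yj * basis
    return total

# Interpolating through (1, 1), (2, 4), (3, 9) reproduces x^2.
print(lagrange_eval(2.5, [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]))  # 6.25
```

This direct form costs O(k^2) operations per evaluation point; the barycentric forms below reduce the per-point cost once the weights are precomputed.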


Barycentric form

Each Lagrange basis polynomial \ell_j(x) can be rewritten as the product of three parts: a function \ell(x) = \prod_m (x - x_m) common to every basis polynomial, a node-specific constant w_j = \prod_{m \neq j}(x_j - x_m)^{-1} (called the ''barycentric weight''), and a part representing the displacement from x_j to x:

:\ell_j(x) = \ell(x) \frac{w_j}{x - x_j}.

By factoring \ell(x) out from the sum, we can write the Lagrange polynomial in the so-called ''first barycentric form'':

:L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x - x_j} y_j.

If the weights w_j have been pre-computed, this requires only \mathcal O(k) operations, compared to \mathcal O(k^2) for evaluating each Lagrange basis polynomial \ell_j(x) individually. The barycentric interpolation formula can also easily be updated to incorporate a new node x_{k+1} by dividing each of the w_j, j = 0, \dots, k, by (x_j - x_{k+1}) and constructing the new w_{k+1} as above.

For any x, \sum_{j=0}^k \ell_j(x) = 1, because the constant function g(x) = 1 is the unique polynomial of degree \leq k interpolating the data \{(x_0, 1), (x_1, 1), \ldots, (x_k, 1)\}. We can thus further simplify the barycentric formula by dividing L(x) = L(x) / g(x):

:\begin{align} L(x) &= \ell(x) \sum_{j=0}^k \frac{w_j}{x - x_j} y_j \Bigg/ \ell(x) \sum_{j=0}^k \frac{w_j}{x - x_j} \\ &= \sum_{j=0}^k \frac{w_j}{x - x_j} y_j \Bigg/ \sum_{j=0}^k \frac{w_j}{x - x_j}. \end{align}

This is called the ''second form'' or ''true form'' of the barycentric interpolation formula. This second form has advantages in computation cost and accuracy: it avoids evaluation of \ell(x); the work to compute each term w_j/(x - x_j) in the denominator has already been done in computing \bigl(w_j/(x - x_j)\bigr)y_j, so computing the sum in the denominator costs only k additions; and for evaluation points x which are close to one of the nodes x_j, catastrophic cancellation would ordinarily be a problem for the value (x - x_j), however this quantity appears in both numerator and denominator and the two cancel, leaving good relative accuracy in the final result. Using this formula to evaluate L(x) at one of the nodes x_j will result in the indeterminate \infty \cdot y_j / \infty; computer implementations must replace such results by L(x_j) = y_j.
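
The following is a minimal Python sketch (not part of the original article; function names are illustrative) of the second barycentric form, including the node-coincidence check just described.

```python
# Minimal sketch of the second (true) barycentric form.
# Weights w_j are precomputed in O(k^2); each evaluation is then O(k).

def barycentric_weights(xs):
    """w_j = 1 / prod_{m != j} (x_j - x_m)."""
    w = [1.0] * len(xs)
    for j in range(len(xs)):
        for m in range(len(xs)):
            if m != j:
                w[j] /= (xs[j] - xs[m])
    return w

def barycentric_eval(x, xs, ys, w):
    """Evaluate L(x) = sum_j (w_j/(x-x_j)) y_j / sum_j (w_j/(x-x_j))."""
    num = 0.0
    den = 0.0
    for xj, yj, wj in zip(xs, ys, w):
        if x == xj:          # evaluation at a node gives the indeterminate
            return yj        # inf/inf, so return the stored value directly
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den

xs, ys = [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]
w = barycentric_weights(xs)
print(barycentric_eval(2.5, xs, ys, w))  # 6.25, since L(x) = x^2 here
```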


A perspective from linear algebra

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial L(x) = \sum_{j=0}^{k} x^j m_j, we must invert the Vandermonde matrix (x_i)^j to solve L(x_i) = y_i for the coefficients m_j of L(x). By choosing a better basis, the Lagrange basis, L(x) = \sum_{j=0}^{k} \ell_j(x) y_j, we merely get the identity matrix \delta_{ij}, which is its own inverse: the Lagrange basis automatically ''inverts'' the analog of the Vandermonde matrix.

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear factors. Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.
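
To make the contrast concrete, here is a minimal sketch (assuming NumPy; not part of the original article) that solves the Vandermonde system for the monomial coefficients of the data used in the example below, and notes that in the Lagrange basis no system needs to be solved at all.

```python
# Minimal sketch: monomial basis requires solving a Vandermonde system,
# while in the Lagrange basis the coefficient matrix is the identity.
import numpy as np

xs = np.array([1.0, 2.0, 3.0])
ys = np.array([1.0, 4.0, 9.0])

# Monomial basis: rows of V are [1, x_i, x_i^2]; solve V m = y for m_j.
V = np.vander(xs, increasing=True)
m = np.linalg.solve(V, ys)
print(m)  # ~[0. 0. 1.]  ->  L(x) = x^2

# Lagrange basis: the matrix [l_j(x_i)] is delta_ij, so the "coefficients"
# are just the values y_j themselves; no linear system is needed.
```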


Example

We wish to interpolate f(x) = x^2 over the domain 1 \leq x \leq 3 at the three nodes:

:\begin{align} x_0 &= 1, & y_0 = f(x_0) &= 1, \\ x_1 &= 2, & y_1 = f(x_1) &= 4, \\ x_2 &= 3, & y_2 = f(x_2) &= 9. \end{align}

The node polynomial \ell is

:\ell(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6.

The barycentric weights are

:\begin{align} w_0 &= (1-2)^{-1}(1-3)^{-1} = \tfrac12, \\ w_1 &= (2-1)^{-1}(2-3)^{-1} = -1, \\ w_2 &= (3-1)^{-1}(3-2)^{-1} = \tfrac12. \end{align}

The Lagrange basis polynomials are

:\begin{align} \ell_0(x) &= \frac{x-2}{1-2}\cdot\frac{x-3}{1-3} = \tfrac12 x^2 - \tfrac52 x + 3, \\ \ell_1(x) &= \frac{x-1}{2-1}\cdot\frac{x-3}{2-3} = -x^2 + 4x - 3, \\ \ell_2(x) &= \frac{x-1}{3-1}\cdot\frac{x-2}{3-2} = \tfrac12 x^2 - \tfrac32 x + 1. \end{align}

The Lagrange interpolating polynomial is:

:\begin{align} L(x) &= 1\cdot\frac{x-2}{1-2}\cdot\frac{x-3}{1-3} + 4\cdot\frac{x-1}{2-1}\cdot\frac{x-3}{2-3} + 9\cdot\frac{x-1}{3-1}\cdot\frac{x-2}{3-2} \\ &= x^2. \end{align}

In (second) barycentric form,

:L(x) = \frac{\dfrac{1/2}{x-1}\cdot 1 + \dfrac{-1}{x-2}\cdot 4 + \dfrac{1/2}{x-3}\cdot 9}{\dfrac{1/2}{x-1} + \dfrac{-1}{x-2} + \dfrac{1/2}{x-3}}.
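
As a quick check of the expansion above, the following minimal sketch (assuming NumPy; not part of the original article) forms 1\cdot\ell_0 + 4\cdot\ell_1 + 9\cdot\ell_2 from the coefficient arrays of the basis polynomials and recovers the coefficients of x^2.

```python
# Minimal sketch: verify the worked example by expanding y_0*l_0 + y_1*l_1 + y_2*l_2.
import numpy as np

# Basis polynomials from the example, as coefficient arrays [x^2, x, 1]:
l0 = np.array([0.5, -2.5, 3.0])   # l_0(x) =  x^2/2 - 5x/2 + 3
l1 = np.array([-1.0, 4.0, -3.0])  # l_1(x) = -x^2  + 4x  - 3
l2 = np.array([0.5, -1.5, 1.0])   # l_2(x) =  x^2/2 - 3x/2 + 1

L = 1 * l0 + 4 * l1 + 9 * l2
print(L)  # [1. 0. 0.]  ->  L(x) = x^2, as computed above
```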


Notes

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.

But, as can be seen from the construction, each time a node x_k changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see above) or Newton polynomials.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.

The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.
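
In connection with the remark on Runge's phenomenon above, the following is a minimal Python sketch (not part of the original article; the function name is an illustrative choice) that computes Chebyshev nodes mapped to an interval [a, b].

```python
# Minimal sketch: Chebyshev nodes (roots of the Chebyshev polynomial T_n)
# mapped from [-1, 1] to [a, b]; these mitigate Runge's phenomenon
# compared with equispaced nodes.
import math

def chebyshev_nodes(a, b, n):
    """Return n Chebyshev nodes on the interval [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * i + 1) * math.pi / (2 * n))
            for i in range(n)]

print(chebyshev_nodes(-1.0, 1.0, 5))
```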


Remainder in Lagrange interpolation formula

When interpolating a given function ''f'' by a polynomial of degree k at the nodes x_0, \ldots, x_k we get the remainder R(x) = f(x) - L(x), which can be expressed as

:R(x) = f[x_0, \ldots, x_k, x]\, \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \qquad x_0 < \xi < x_k,

where f[x_0, \ldots, x_k, x] is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as

:R(x) = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t - x)(t - x_0)(t - x_1)\cdots(t - x_k)}\, dt = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t - x)\,\ell(t)}\, dt.

The remainder can be bounded as

:|R(x)| \leq \frac{(x_k - x_0)^{k+1}}{(k+1)!} \max_{x_0 \leq \xi \leq x_k} |f^{(k+1)}(\xi)|.
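
As an illustration of the error bound above, the following minimal sketch (not part of the original article; the choice of f, interval, and node count are assumptions made for the example) evaluates the bound for f(x) = \sin(x) interpolated at k + 1 = 4 nodes on [0, \pi], where |f^{(4)}(\xi)| \leq 1.

```python
# Minimal sketch: numeric value of the worst-case remainder bound
# |R(x)| <= (x_k - x_0)^(k+1) / (k+1)! * max |f^(k+1)|
# for f(x) = sin(x) on [0, pi] with k = 3 (four nodes), max |f^(4)| = 1.
import math

x0, xk, k = 0.0, math.pi, 3
bound = (xk - x0) ** (k + 1) / math.factorial(k + 1) * 1.0
print(bound)  # ~4.06: a coarse worst-case bound on |R(x)| over [0, pi]
```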


Derivation

Clearly, R(x) is zero at the nodes. To find R(x) at a point x_p, define a new function F(x) = R(x) - \tilde{R}(x) = f(x) - L(x) - \tilde{R}(x) and choose \tilde{R}(x) = C \cdot \prod_{i=0}^{k}(x - x_i), where C is the constant we are required to determine for a given x_p. We choose C so that F(x) has k + 2 zeroes (at all nodes and at x_p) between x_0 and x_k (including endpoints). Assuming that f(x) is (k+1)-times differentiable, and since L(x) and \tilde{R}(x) are polynomials and therefore infinitely differentiable, F(x) is (k+1)-times differentiable. By Rolle's theorem, F^{(1)}(x) has k + 1 zeroes, F^{(2)}(x) has k zeroes, \ldots, and F^{(k+1)} has one zero, say \xi, with x_0 < \xi < x_k. Explicitly writing F^{(k+1)}(\xi):

:F^{(k+1)}(\xi) = f^{(k+1)}(\xi) - L^{(k+1)}(\xi) - \tilde{R}^{(k+1)}(\xi)

:L^{(k+1)} = 0, \quad \tilde{R}^{(k+1)} = C \cdot (k+1)!

(because the highest power of x in \tilde{R}(x) is k + 1), so

:0 = f^{(k+1)}(\xi) - C \cdot (k+1)!

The equation can be rearranged as

:C = \frac{f^{(k+1)}(\xi)}{(k+1)!}.

Since F(x_p) = 0, we have R(x_p) = \tilde{R}(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!} \prod_{i=0}^{k}(x_p - x_i).


Derivatives

The dth derivative of the Lagrange polynomial can be written as

:L^{(d)}(x) := \sum_{j=0}^{k} y_j \ell_j^{(d)}(x).

For the first derivative, the coefficients are given by

:\ell_j^{(1)}(x) := \sum_{i \neq j} \left[ \frac{1}{x_j - x_i} \prod_{m \neq (i,j)} \frac{x - x_m}{x_j - x_m} \right]

and for the second derivative

:\ell_j^{(2)}(x) := \sum_{i \neq j} \frac{1}{x_j - x_i} \left[ \sum_{m \neq (i,j)} \left( \frac{1}{x_j - x_m} \prod_{l \neq (i,j,m)} \frac{x - x_l}{x_j - x_l} \right) \right].

Through recursion, one can compute formulas for higher derivatives.
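
The following minimal Python sketch (not part of the original article; the function name is illustrative) evaluates the first-derivative formula above, checked against the earlier example L(x) = x^2, for which L'(2.5) = 5.

```python
# Minimal sketch: L'(x) = sum_j y_j * l_j'(x), with
# l_j'(x) = sum_{i != j} [ 1/(x_j - x_i) * prod_{m != i,j} (x - x_m)/(x_j - x_m) ].

def lagrange_derivative(x, xs, ys):
    """Evaluate the first derivative of the Lagrange interpolant at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        dlj = 0.0
        for i, xi in enumerate(xs):
            if i == j:
                continue
            term = 1.0 / (xj - xi)
            for m, xm in enumerate(xs):
                if m != i and m != j:
                    term *= (x - xm) / (xj - xm)
            dlj += term
        total += yj * dlj
    return total

# Interpolating (1,1), (2,4), (3,9) gives L(x) = x^2, so L'(2.5) should be 5.
print(lagrange_derivative(2.5, [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]))  # 5.0
```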


Finite fields

The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
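
The following minimal Python sketch (not part of the original article) evaluates the Lagrange interpolant at x = 0 over a prime field GF(p), in the style used for secret reconstruction in Shamir's scheme; the prime and the share values are illustrative choices, not a real protocol.

```python
# Minimal sketch: Lagrange interpolation at x = 0 over GF(p), p prime.

def lagrange_at_zero_mod_p(shares, p):
    """Recover L(0) mod p from (x_j, y_j) shares with distinct x_j."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * (-xm)) % p        # factor (0 - x_m)
                den = (den * (xj - xm)) % p    # factor (x_j - x_m)
        # Modular inverse of den via Fermat's little theorem (p prime).
        secret = (secret + yj * num * pow(den, p - 2, p)) % p
    return secret

# Shares of the polynomial 7 + 3x over GF(11): (1, 10), (2, 2), (3, 5).
print(lagrange_at_zero_mod_p([(1, 10), (2, 2), (3, 5)], 11))  # 7
```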


See also

* Neville's algorithm
* Newton form of the interpolation polynomial
* Bernstein polynomial
* Carlson's theorem
* Lebesgue constant (interpolation)
* The Chebfun system
* Table of Newtonian series
* Frobenius covariant
* Sylvester's formula
* Finite difference coefficient
* Hermite interpolation


References


External links

* ALGLIB has implementations in C++ / C# / VBA / Pascal.
* GSL has a polynomial interpolation code in C.
* SO has a MATLAB example that demonstrates the algorithm and recreates the first image in this article.
* Lagrange interpolation at the Holistic Numerical Methods Institute (http://numericalmethods.eng.usf.edu)
* Lagrange interpolation polynomial on www.math-linux.com
* Dynamic Lagrange interpolation with JSXGraph
* Numerical computing with functions: The Chebfun Project
* Excel Worksheet Function for Bicubic Lagrange Interpolation
* Lagrange polynomials in Python